Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public. As of August 2015, Common Crawl's web archive consisted of 145 TB of data from 1.81 billion webpages (see "July 2015 Crawl Archive Available"). The organization completes four crawls a year. Common Crawl was founded by Gil Elbaz; advisors to the nonprofit include Peter Norvig and Joi Ito. Its crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.

==History==

Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012. In July of that year, the organization began releasing metadata files and the text output of its crawlers alongside .arc files; previously, the archives had included only .arc files. In December 2012, blekko donated to Common Crawl search-engine metadata it had gathered from crawls conducted between February and October 2012. The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO." In 2013, Common Crawl began using the Apache Software Foundation's Nutch web crawler in place of its custom crawler. Common Crawl switched from .arc files to .warc files with its November 2013 crawl.
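
The WARC archives can be processed with widely available open-source tools. The following is a minimal sketch, assuming the third-party warcio Python library and a locally downloaded crawl segment (the file name used here is illustrative, not an actual Common Crawl path); it iterates over HTTP response records and prints each captured page's URL and payload size.

```python
# Minimal sketch of reading a Common Crawl WARC file with warcio.
# Assumes: pip install warcio, and a crawl segment downloaded locally
# (the file name below is hypothetical, not a real dataset path).
from warcio.archiveiterator import ArchiveIterator


def summarize_warc(path):
    """Print the target URI and payload size of each HTTP response record."""
    with open(path, 'rb') as stream:
        # ArchiveIterator transparently handles gzip-compressed .warc.gz files.
        for record in ArchiveIterator(stream):
            if record.rec_type != 'response':
                continue  # skip request, metadata, and warcinfo records
            uri = record.rec_headers.get_header('WARC-Target-URI')
            payload = record.content_stream().read()
            print(f'{len(payload):>10} bytes  {uri}')


if __name__ == '__main__':
    summarize_warc('CC-MAIN-example.warc.gz')  # hypothetical local file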
```